# Open-source Large Models
## SWE Agent LM 32B GGUF
**Author:** Mungert · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers, English · **Downloads:** 2,933 · **Likes:** 1

SWE-agent-LM-32B is an open-source software engineering language model, fine-tuned from Qwen/Qwen2.5-Coder-32B-Instruct and designed specifically for software engineering tasks.

## Dots.llm1.inst
**Author:** rednote-hilab · **License:** MIT · **Tags:** Large Language Model, Transformers, Multilingual · **Downloads:** 440 · **Likes:** 97

dots.llm1 is a large-scale MoE model that activates 14 billion of its 142 billion total parameters per token, with performance comparable to state-of-the-art models.

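The sparse-MoE design dots.llm1 describes (activating only a subset of expert parameters per token) can be sketched with a minimal top-k router. The layer sizes and the choice of k below are illustrative assumptions, not the model's real configuration:

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route one token through the top-k experts of a sparse MoE layer.

    x:       (d,) token activation
    gate_w:  (d, n_experts) router weights
    experts: list of (d, d) expert weight matrices
    Only k experts run, so only a fraction of the parameters is active.
    """
    logits = x @ gate_w                      # one router score per expert
    top = np.argsort(logits)[-k:]            # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
gate_w = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
x = rng.normal(size=d)
y = moe_forward(x, gate_w, experts, k=2)
print(y.shape)  # output produced by just 2 of the 16 experts
```

With k=2 of 16 experts, only 1/8 of the expert parameters touch each token, which is the same effect (at toy scale) as dots.llm1 activating 14B of 142B parameters.
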
## Gemma 3 4b It Q8 0 GGUF
**Author:** NikolayKozloff · **Tags:** Large Language Model · **Downloads:** 56 · **Likes:** 2

This is a GGUF-quantized version of Google's Gemma 3 4B instruction-tuned model, suitable for local deployment and inference.

## Gemma 3 12b It Q5 K S GGUF
**Author:** NikolayKozloff · **Tags:** Large Language Model · **Downloads:** 16 · **Likes:** 1

This is a GGUF-quantized version of Google's Gemma 3 12B instruction-tuned model, suitable for local inference and text generation tasks.

## Deepseek R1 Distill Qwen 32B Quantized.w8a8
**Author:** RedHatAI · **License:** MIT · **Tags:** Large Language Model, Transformers · **Downloads:** 3,572 · **Likes:** 11

A quantized version of DeepSeek-R1-Distill-Qwen-32B that reduces memory requirements and improves computational efficiency through INT8 weight and activation quantization (w8a8).

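The w8a8 scheme named above (INT8 weights and INT8 activations) boils down to mapping floats onto the int8 range with a scale factor. A minimal symmetric round-to-nearest sketch, which is not RedHatAI's actual compression pipeline but shows the core idea:

```python
import numpy as np

def quantize_int8(t):
    """Symmetric per-tensor INT8 quantization: t ≈ q * scale."""
    scale = np.abs(t).max() / 127.0          # map the largest magnitude to 127
    q = np.clip(np.round(t / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(q.dtype, f"max abs error {err:.4f}")   # int8 storage; round-off error is at most scale/2
```

Storing int8 instead of float16 halves the weight memory, and int8 matmuls also run faster on hardware with INT8 tensor cores, which is where the efficiency gain comes from.
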
## Reflection Llama 3.1 70B
**Author:** mattshumer · **Tags:** Large Language Model, Transformers · **Downloads:** 199 · **Likes:** 1,712

Reflection Llama-3.1 70B is an open-source large language model trained with 'reflection tuning', a technique intended to let the model detect errors in its own reasoning and correct course.

## Tarsier 7b
**Author:** omni-research · **Tags:** Video-to-Text, Transformers · **Downloads:** 635 · **Likes:** 23

Tarsier-7b is an open-source large-scale video-language model from the Tarsier series, specializing in generating high-quality video descriptions alongside strong general video understanding.

## Wizardlaker 7B
**Author:** Noodlz · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **Downloads:** 22 · **Likes:** 2

WizardLaker 7B is a fusion model combining the next-generation WizardLM 2 7B with a customized DolphinLake model, delivering strong overall performance.

## Microsoft WizardLM 2 7B
**Author:** lucyknada · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **Downloads:** 168 · **Likes:** 51

WizardLM-2 7B is a highly efficient large language model from the Microsoft AI team. Built on the Mistral-7B architecture, it excels at multilingual, reasoning, and agent tasks.

## Cogvlm Grounding Generalist Hf Quant4
**Author:** Rodeszones · **License:** Apache-2.0 · **Tags:** Image-to-Text, Transformers · **Downloads:** 50 · **Likes:** 9

CogVLM is a powerful open-source vision-language model supporting tasks such as object grounding and visual question answering; this repository provides a 4-bit quantized version.

## Aya 101
**Author:** CohereLabs · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers, Multilingual · **Downloads:** 3,468 · **Likes:** 640

Aya 101 is a large-scale multilingual generative language model that follows instructions in 101 languages, outperforming comparable models across a range of evaluations.

## Emollama Chat 7b
**Author:** lzw1008 · **License:** MIT · **Tags:** Large Language Model, Transformers, English · **Downloads:** 281 · **Likes:** 4

Emollama-chat-7b is part of EmoLLMs, the first open-source instruction-following large language model series focused on comprehensive affective analysis.

## Supermario V2
**Author:** jan-hq · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers, English · **Downloads:** 77 · **Likes:** 8

supermario-v2 is a merged model based on Mistral-7B-v0.1 that combines three different models using the DARE_TIES method, with strong text generation capabilities.

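The DARE step in a DARE_TIES merge drops a random fraction of each fine-tune's weight delta and rescales the survivors so the expected delta is preserved, before the TIES sign-election combines the sparsified deltas. A toy sketch of just the DARE step (the drop rate p and tensor sizes are illustrative choices, and the TIES step is omitted):

```python
import numpy as np

def dare(delta, p, rng):
    """Drop-And-REscale: zero a fraction p of the delta, scale survivors by 1/(1-p)."""
    mask = rng.random(delta.shape) >= p      # keep each entry with probability 1 - p
    return np.where(mask, delta / (1.0 - p), 0.0)

rng = np.random.default_rng(0)
base = rng.normal(size=10_000)                        # stand-in for base model weights
finetune = base + rng.normal(scale=0.01, size=10_000) # stand-in for a fine-tuned checkpoint
delta = finetune - base                               # the fine-tune's "task vector"
sparse_delta = dare(delta, p=0.9, rng=rng)
merged = base + sparse_delta                          # merge the sparsified delta back

print(f"{np.mean(sparse_delta == 0):.2f}")            # roughly 0.90 of entries dropped
```

The 1/(1-p) rescaling is what lets very high drop rates work: each surviving entry carries the expected contribution of the dropped ones, so several fine-tunes can be merged with little interference.
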
## Mixtral 8x7B Instruct V0.1
**Author:** mistralai · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers, Multilingual · **Downloads:** 505.97k · **Likes:** 4,397

Mixtral-8x7B is a pretrained generative sparse mixture-of-experts model that outperforms Llama 2 70B on most benchmarks.

## Yi 34B Chat
**Author:** 01-ai · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers · **Downloads:** 5,784 · **Likes:** 350

Yi-34B-Chat is a bilingual large language model developed by 01.AI that supports interaction in both Chinese and English and excels at language understanding, commonsense reasoning, and reading comprehension.

## Koopenchat Sft
**Author:** maywell · **Tags:** Large Language Model, Transformers · **Downloads:** 1,836 · **Likes:** 7

koOpenChat-sft is a Korean dialogue model fine-tuned from OpenChat 3.5 that accepts instructions in both ChatML and Alpaca formats.

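ChatML, one of the two prompt formats the koOpenChat-sft card mentions, wraps each turn in `<|im_start|>role … <|im_end|>` markers. A minimal formatter using the standard ChatML tokens (not read from this model's tokenizer config, so treat it as a sketch):

```python
def to_chatml(messages):
    """Render a list of {role, content} turns as a ChatML prompt string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")   # leave the assistant turn open for generation
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful Korean assistant."},
    {"role": "user", "content": "안녕하세요!"},
])
print(prompt)
```

In practice `tokenizer.apply_chat_template` in the Transformers library does this from the template shipped with the model, which is the safer route when a checkpoint supports more than one format.
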
## Redpajama INCITE 7B Chat
**Author:** togethercomputer · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers, English · **Downloads:** 178 · **Likes:** 93

A 6.9-billion-parameter dialogue language model developed by Together in collaboration with multiple AI research institutions, trained on the RedPajama-Data-1T dataset and given its dialogue capability through fine-tuning on OASST1 and Dolly2 data.

## Flan T5 Large
**Author:** google · **License:** Apache-2.0 · **Tags:** Large Language Model, Multilingual · **Downloads:** 589.25k · **Likes:** 749

FLAN-T5 is an instruction-fine-tuned language model based on T5. Fine-tuned on more than 1,000 tasks covering 60+ languages, it achieves stronger performance than T5 at the same parameter count.
